An Inexact Accelerated Proximal Gradient Method for Large Scale Linearly Constrained Convex SDP

Authors

  • Kaifeng Jiang
  • Defeng Sun
  • Kim-Chuan Toh
Abstract

The accelerated proximal gradient (APG) method, first proposed by Nesterov for minimizing smooth convex functions, later extended by Beck and Teboulle to composite convex objective functions, and studied in a unifying manner by Tseng, has proven to be highly efficient in solving some classes of large scale structured (possibly nonsmooth) convex optimization problems, including nuclear norm minimization problems in matrix completion and ℓ1 minimization problems in compressed sensing. The method has a superior worst-case iteration complexity to the classical projected gradient method and usually performs well in practice on problems with appropriate structure. In this paper, we extend the APG method to the inexact setting, where the subproblem in each iteration is solved only approximately, and show that it enjoys the same worst-case iteration complexity as its exact counterpart if the subproblems are progressively solved to sufficient accuracy. We apply our inexact APG method to solve large scale convex quadratic semidefinite programming (QSDP) problems of the form min{ ½⟨x, Q(x)⟩ + ⟨c, x⟩ | A(x) = b, x ⪰ 0 }, where Q, A are given linear maps and b, c are given data. The subproblem in each iteration is solved by a semismooth Newton-CG (SSNCG) method, warm-started with the iterate from the previous iteration. Our APG-SSNCG method is demonstrated to be efficient for QSDP problems whose positive semidefinite linear maps Q are highly ill-conditioned or rank deficient.
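To make the iteration concrete, here is a minimal Python sketch of an inexact FISTA-style APG loop. It simplifies heavily: the equality constraint A(x) = b is dropped, so the per-iteration subproblem reduces to a projection onto the positive semidefinite cone rather than the constrained subproblem the paper solves with SSNCG, and the names (inexact_apg, approx_prox) and the 1/k⁴ tolerance schedule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def psd_projection(M):
    # Project a symmetric matrix onto the positive semidefinite cone
    # by zeroing out the negative eigenvalues.
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def inexact_apg(grad_f, approx_prox, x0, L, n_iter=200):
    # FISTA-style accelerated proximal gradient loop in which the
    # proximal subproblem is solved only to a decreasing tolerance.
    x_prev, x = x0, x0
    t_prev, t = 1.0, 1.0
    for k in range(1, n_iter + 1):
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)       # extrapolation
        tol = 1.0 / k**4   # tolerance must decay fast enough to keep O(1/k^2)
        x_prev, x = x, approx_prox(y - grad_f(y) / L, tol)
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return x

# Toy QSDP-like instance with Q(X) = P X P (a PSD linear map) and no
# equality constraints, so the subproblem is an (exact) PSD projection.
rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, n))
P = B @ B.T / n                                 # ill-conditioned PSD matrix
C = rng.standard_normal((n, n)); C = (C + C.T) / 2.0
grad_f = lambda X: P @ X @ P + C                # gradient of (1/2)<X,Q(X)> + <C,X>
L = np.linalg.norm(P, 2) ** 2                   # Lipschitz constant of grad_f
X_sol = inexact_apg(grad_f, lambda G, tol: psd_projection(G), np.zeros((n, n)), L)
```

In the paper, the inner subproblem keeps the constraint A(x) = b, has no closed-form solution, and is solved by SSNCG warm-started from the previous iterate; the tolerance schedule above only illustrates the requirement that the subproblem errors decrease sufficiently fast.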


Similar Articles

Spectral Norm Regularization of Orthonormal Representations for Graph Transduction

Recent literature [1] suggests that embedding a graph on a unit sphere leads to better generalization for graph transduction. However, the choice of an optimal embedding and an efficient algorithm to compute it remain open. In this paper, we show that orthonormal representations, a class of unit-sphere graph embeddings, are PAC learnable. Existing PAC-based analyses do not apply, as the VC d...


Inexact accelerated augmented Lagrangian methods

The augmented Lagrangian method is a popular method for solving linearly constrained convex minimization problems and has been used in many applications. Recently, an accelerated version of the augmented Lagrangian method was developed. The augmented Lagrangian method involves a subproblem that does not have a closed-form solution in general. In this talk, we propose an inexact version of the accelerate...
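As a rough illustration of this idea, the sketch below runs an augmented Lagrangian loop for min μ‖x‖₁ subject to Ax = b, solving each subproblem only approximately with a capped number of ISTA steps. This is a hedged sketch under assumed names and parameters (augmented_lagrangian_l1, mu, rho, the inner iteration cap); it is not the accelerated method proposed in the cited talk.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def augmented_lagrangian_l1(A, b, mu=1.0, rho=1.0, outer=50, inner=100):
    # Augmented Lagrangian loop for: min mu*||x||_1  s.t.  Ax = b.
    # The subproblem  min_x mu*||x||_1 + <p, Ax-b> + (rho/2)*||Ax-b||^2
    # has no closed-form solution, so it is solved inexactly by ISTA.
    m, n = A.shape
    x, p = np.zeros(n), np.zeros(m)
    step = 1.0 / (rho * np.linalg.norm(A, 2) ** 2)    # ISTA step size
    for _ in range(outer):
        for _ in range(inner):                        # inexact inner solve
            g = A.T @ (p + rho * (A @ x - b))         # gradient of smooth part
            x = soft_threshold(x - step * g, step * mu)
        p = p + rho * (A @ x - b)                     # multiplier update
    return x
```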


Convex Optimization Learning of Faithful Euclidean Distance Representations in Nonlinear Dimensionality Reduction

Classical multidimensional scaling only works well when the noisy distances observed in a high dimensional space can be faithfully represented by Euclidean distances in a low dimensional space. Advanced models such as Maximum Variance Unfolding (MVU) and Minimum Volume Embedding (MVE) use Semi-Definite Programming (SDP) to reconstruct such faithful representations. While those SDP models are ca...


Accelerated Bregman Method for Linearly Constrained ℓ1-ℓ2 Minimization

We consider the linearly constrained ℓ1-ℓ2 minimization problem and propose an accelerated Bregman method for solving it. The proposed method is based on the extrapolation technique, which is used in the accelerated proximal gradient methods studied by Nesterov, Nemirovski, and others, and on the equivalence between the Bregman method and the augmented Lagrangian method. O(1/k²) conv...
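The equivalence referred to here is the standard one for min μ‖x‖₁ subject to Ax = b: the Bregman method's "add the residual back to the data" update coincides with the multiplier update of the augmented Lagrangian method. A sketch, with the penalty parameter fixed at 1 and notation chosen here rather than taken from the cited abstract:

```latex
\text{Bregman:}\quad
  b^{k+1} = b + (b^{k} - A x^{k}), \qquad
  x^{k+1} = \arg\min_{x}\ \mu\|x\|_{1} + \tfrac{1}{2}\|Ax - b^{k+1}\|_{2}^{2},
\\[4pt]
\text{Augmented Lagrangian:}\quad
  x^{k+1} = \arg\min_{x}\ \mu\|x\|_{1} + \langle p^{k}, Ax - b\rangle
            + \tfrac{1}{2}\|Ax - b\|_{2}^{2}, \qquad
  p^{k+1} = p^{k} + (A x^{k+1} - b).
```

Completing the square in the augmented Lagrangian subproblem shows the two iterations coincide under the identification b^{k+1} = b - p^{k}.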


Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization

Non-convex and non-smooth optimization plays an important role in machine learning. The proximal gradient method is one of the most important methods for solving non-convex and non-smooth problems, where a proximal operator needs to be solved exactly at each step. However, in many problems the proximal operator does not have an analytic solution, or an exact solution is expensive to obtain. ...
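To illustrate what an inexact proximal evaluation can look like, the sketch below approximates prox_g(v) = argmin_u g(u) + ½‖u - v‖² by running gradient descent on the subproblem until a residual tolerance is met. The interface (approx_prox, the choice of g, the step size) is a hypothetical illustration for a smooth g, not the inexactness criterion of the cited paper.

```python
import numpy as np

def approx_prox(grad_g, v, step, tol, max_iter=1000):
    # Approximately evaluate prox_g(v) = argmin_u g(u) + 0.5*||u - v||^2
    # by gradient descent on the subproblem, stopping once the gradient
    # of the prox objective falls below tol.
    u = v.copy()
    for _ in range(max_iter):
        r = grad_g(u) + (u - v)        # gradient of the prox objective
        if np.linalg.norm(r) <= tol:
            break
        u = u - step * r
    return u

# Example with g(u) = 0.5*||B u||^2; its exact prox is a linear solve,
# and it is used here only to exercise the inexact interface.
B = np.array([[2.0, 0.5], [0.5, 1.0], [0.0, 3.0]])
grad_g = lambda u: B.T @ (B @ u)
L_sub = np.linalg.norm(B, 2) ** 2 + 1.0   # Lipschitz constant of the subproblem
u_hat = approx_prox(grad_g, np.array([1.0, -2.0]), 1.0 / L_sub, 1e-8)
```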




Journal:
  • SIAM Journal on Optimization

Volume 22, Issue

Pages -

Published 2012